
    Acoustic Characterization of Flame Blowout Phenomenon

    Combustor blowout is a very serious concern in modern land-based and aircraft engine combustors. The ability to sense blowout precursors can provide significant payoffs in engine reliability and life. The objective of this work is to characterize the blowout phenomenon and develop a sensing methodology that can detect and assess the proximity of a combustor to blowout by monitoring its acoustic signature, thus providing early warning before the combustor actually blows out. The first part of the work examines the blowout phenomenon in a piloted jet burner. As blowout was approached, the flame detached from one side of the burner and showed increased flame-tip fluctuations, resulting in an increase in low-frequency acoustics. Work then focused on swirling combustion systems. Close to blowout, localized extinction/re-ignition events were observed, which manifested as bursts in the acoustic signal. These events increased in number and duration as the combustor approached blowout, again resulting in an increase in low-frequency acoustics. A variety of spectral, wavelet, and thresholding-based approaches were developed to detect precursors to blowout. The third part of the study focused on a bluff body burner. It characterized the underlying flame dynamics near blowout in greater detail and related them to the observed acoustic emissions. Vorticity was found to play a significant role in the flame dynamics. The flame passed through two distinct stages prior to blowout. The first was associated with momentary strain levels that exceeded the flame's extinction strain rate, leading to flame holes. The second was due to large-scale alteration of the fluid dynamics in the bluff body wake, leading to violent flapping of the flame front and even larger straining of the flame. This led to low-frequency acoustic oscillations on the order of the von Kármán vortex shedding frequency, which manifested as an abrupt increase in the combustion noise spectrum at 40-100 Hz very close to blowout.
Finally, work was also done to improve the robustness of lean blowout detection by developing integration techniques that combined data from acoustic and optical sensors.
Ph.D. Committee Chair: Dr. Tim Lieuwen; Committee Member: Dr. B. T. Zinn; Committee Member: Dr. Jeff Jagoda; Committee Member: Dr. Jerry Seitzman; Committee Member: Dr. Marios Soterio
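The detection idea described above — watching for a rise in low-frequency acoustic energy as a blowout precursor — can be sketched as follows. This is an illustrative toy, not the authors' code: the 40-100 Hz band comes from the abstract, while the sample rate, the ratio threshold, and the comparison against a stable-flame baseline are assumptions.

```python
# Sketch: low-frequency acoustic band power as a lean-blowout precursor.
# The 40-100 Hz band follows the abstract; threshold and baseline scheme
# are hypothetical choices for illustration.
import numpy as np

def band_power(signal, fs, f_lo=40.0, f_hi=100.0):
    """Mean spectral power of `signal` in the [f_lo, f_hi] Hz band."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return spectrum[mask].mean()

def blowout_precursor(signal, fs, baseline_power, ratio_threshold=3.0):
    """Flag a precursor when band power rises well above the stable-flame baseline."""
    return band_power(signal, fs) > ratio_threshold * baseline_power
```

In practice the thesis also develops wavelet and thresholding variants; this spectral-ratio form is just the simplest member of that family.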

    Cross-language Information Retrieval

    Two key assumptions shape the usual view of ranked retrieval: (1) that the searcher can choose words for their query that might appear in the documents they wish to see, and (2) that ranking retrieved documents will suffice because the searcher will be able to recognize those they wished to find. When the documents to be searched are in a language not known by the searcher, neither assumption holds. In such cases, Cross-Language Information Retrieval (CLIR) is needed. This chapter reviews the state of the art for CLIR and outlines some open research questions.
Comment: 49 pages, 0 figures

    Neural Task Programming: Learning to Generalize Across Hierarchical Tasks

    In this work, we propose a novel robot learning framework called Neural Task Programming (NTP), which bridges the ideas of few-shot learning from demonstration and neural program induction. NTP takes as input a task specification (e.g., a video demonstration of a task) and recursively decomposes it into finer sub-task specifications. These specifications are fed to a hierarchical neural program, where bottom-level programs are callable subroutines that interact with the environment. We validate our method in three robot manipulation tasks. NTP achieves strong generalization across sequential tasks that exhibit hierarchical and compositional structures. The experimental results show that NTP learns to generalize well towards unseen tasks with increasing lengths, variable topologies, and changing objectives.
Comment: ICRA 201
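The recursive control flow the abstract describes — a task specification decomposed into finer sub-task specifications until bottom-level programs act on the environment — can be illustrated with a toy interpreter. All names here (the nested-list task encoding, the primitive action strings) are hypothetical stand-ins, not NTP's actual neural programs.

```python
# Toy sketch of NTP-style recursive decomposition: nested lists play the
# role of sub-task specifications, and leaf strings play the role of
# bottom-level programs that interact with the environment.
def run_program(spec, env_log):
    """Recursively decompose `spec`; leaves are primitive environment actions."""
    if isinstance(spec, str):       # bottom-level program: act on the environment
        env_log.append(spec)
        return
    for sub_spec in spec:           # decompose into finer sub-task specs
        run_program(sub_spec, env_log)

# Hypothetical block-sorting task: each pick-and-place is one sub-task.
task = [["pick(blockA)", "place(blockA, bin1)"],
        ["pick(blockB)", "place(blockB, bin2)"]]
log = []
run_program(task, log)              # log records primitives in execution order
```

In NTP itself the decomposition is produced by a learned network conditioned on the demonstration, rather than read off a fixed data structure; the sketch only shows the hierarchical call pattern.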

    Behavior Retrieval: Few-Shot Imitation Learning by Querying Unlabeled Datasets

    Enabling robots to learn novel visuomotor skills in a data-efficient manner remains an unsolved problem with myriad challenges. A popular paradigm for tackling this problem is to leverage large unlabeled datasets that contain many behaviors and then adapt a policy to a specific task using a small amount of task-specific human supervision (i.e., interventions or demonstrations). However, how best to leverage the narrow task-specific supervision and balance it with offline data remains an open question. Our key insight in this work is that task-specific data not only provides new data for an agent to train on but can also inform the type of prior data the agent should use for learning. Concretely, we propose a simple approach that uses a small amount of downstream expert data to selectively query relevant behaviors from an offline, unlabeled dataset (including many sub-optimal behaviors). The agent is then jointly trained on the expert and queried data. We observe that our method learns to query only the transitions relevant to the task, filtering out sub-optimal or task-irrelevant data. By doing so, it is able to learn more effectively from the mix of task-specific and offline data than by naively mixing the data or using the task-specific data alone. Furthermore, we find that our simple querying approach outperforms more complex goal-conditioned methods by 20% across simulated and real robotic manipulation tasks from images. See https://sites.google.com/view/behaviorretrieval for videos and code.
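The querying step described above — scoring offline transitions by similarity to a small expert set and keeping the most relevant ones — can be sketched in a few lines. The embedding space, the max-cosine-similarity score, and the keep fraction below are illustrative assumptions, not the paper's exact retrieval mechanism.

```python
# Sketch of expert-conditioned retrieval from an unlabeled offline dataset:
# keep the offline transitions whose embeddings best match any expert
# embedding. Embeddings and keep_frac are hypothetical.
import numpy as np

def retrieve(expert_embs, offline_embs, keep_frac=0.25):
    """Return indices of the offline transitions most similar to expert data.

    Score = max cosine similarity to any expert embedding."""
    e = expert_embs / np.linalg.norm(expert_embs, axis=1, keepdims=True)
    o = offline_embs / np.linalg.norm(offline_embs, axis=1, keepdims=True)
    scores = (o @ e.T).max(axis=1)          # best expert match per offline item
    k = max(1, int(keep_frac * len(offline_embs)))
    return np.argsort(scores)[-k:]          # indices of the top-k matches
```

Joint training would then proceed on the union of the expert set and the retrieved subset, which is the filtering behavior the abstract attributes to the method.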

    Example-Driven Model-Based Reinforcement Learning for Solving Long-Horizon Visuomotor Tasks

    In this paper, we study the problem of learning a repertoire of low-level skills from raw images that can be sequenced to complete long-horizon visuomotor tasks. Reinforcement learning (RL) is a promising approach for acquiring short-horizon skills autonomously. However, the focus of RL algorithms has largely been on the success of those individual skills, more so than on learning and grounding a large repertoire of skills that can be sequenced to complete extended multi-stage tasks. The latter demands robustness and persistence, as errors in skills can compound over time, and may require the robot to have a number of primitive skills in its repertoire rather than just one. To this end, we introduce EMBER, a model-based RL method for learning primitive skills that are suitable for completing long-horizon visuomotor tasks. EMBER learns and plans using a learned model, critic, and success classifier, where the success classifier serves both as a reward function for RL and as a grounding mechanism to continuously detect whether the robot should retry a skill when unsuccessful or under perturbations. Further, the learned model is task-agnostic and trained using data from all skills, enabling the robot to efficiently learn a number of distinct primitives. These visuomotor primitive skills and their associated pre- and post-conditions can then be directly combined with off-the-shelf symbolic planners to complete long-horizon tasks. On a Franka Emika robot arm, we find that EMBER enables the robot to complete three long-horizon visuomotor tasks at an 85% success rate, such as organizing an office desk, a file cabinet, and drawers, which require sequencing up to 12 skills, involve 14 unique learned primitives, and demand generalization to novel objects.
Comment: Equal advising and contribution for last two authors
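The grounding role of the success classifier — deciding after each execution whether a skill succeeded and triggering a retry if not — can be sketched as a simple control loop. Everything here (the `act` callback, the classifier signature, the retry budget) is a hypothetical stand-in for illustration, not EMBER's implementation.

```python
# Sketch of classifier-grounded skill execution: run each skill in a plan,
# retrying while the success classifier reports failure, up to a budget.
def execute_skill(skill, act, is_success, max_retries=3):
    """Run `skill` up to `max_retries` times, retrying on classifier failure."""
    for attempt in range(1, max_retries + 1):
        act(skill)                      # stand-in for acting in the world
        if is_success(skill, attempt):  # success classifier's verdict
            return attempt              # attempts used
    return None                         # skill abandoned after max_retries

def run_plan(skills, act, is_success):
    """Execute a symbolic plan skill-by-skill; abort if a skill keeps failing."""
    for skill in skills:
        if execute_skill(skill, act, is_success) is None:
            return False
    return True
```

In EMBER the same classifier also supplies the RL reward, so the signal that trains each skill and the signal that sequences skills come from one model.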